Description:

> [!IMPORTANT]
> Microsoft has retired or limited facial recognition capabilities that can be used to try to infer emotional states and identity attributes which, if misused, can subject people to stereotyping, discrimination, or unfair denial of services. The retired capabilities are emotion and gender. The limited capabilities are age, smile, facial hair, hair, and makeup. Email [Azure Face API](mailto:azureface@microsoft.com) if you have a responsible use case that would benefit from the use of any of the limited capabilities. Read more about this decision [here](https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/).

* No image will be stored. Only the extracted face feature(s) will be stored on the server. The faceId is an identifier of the face feature and will be used in "Identify", "Verify", and "Find Similar". The stored face features expire and are deleted at the time specified by faceIdTimeToLive after the original detection call.
* Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some of the results returned for specific attributes may not be highly accurate.
* JPEG, PNG, GIF (the first frame), and BMP formats are supported. The allowed image file size is from 1 KB to 6 MB.
* The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images larger than 1920x1080 pixels need a proportionally larger minimum face size (see the sketch after this list).
* Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.
* For optimal results when querying "Identify", "Verify", and "Find Similar" ('returnFaceId' is true), use faces that are frontal, clear, and at least 200x200 pixels (100 pixels between eyes).
* Different 'detectionModel' values can be provided. The availability of landmarks and supported attributes depends on the detection model specified. To use and compare different detection models, refer to [this guide](https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model).
* Different 'recognitionModel' values are provided. If follow-up operations like "Verify", "Identify", or "Find Similar" are needed, specify the recognition model with the 'recognitionModel' parameter. The default value is 'recognition_01'; if the latest model is needed, explicitly specify it in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. For more details, refer to [this guide](https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model).
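The file-size, format, and face-size limits above can be checked client-side before calling the service. A minimal pre-flight sketch, assuming Pillow is available; the `check_image` helper is hypothetical, only the limits themselves come from this page:

```python
# Pre-flight check against the documented input limits (1 KB-6 MB,
# JPEG/PNG/GIF/BMP, 36x36 px minimum face in a frame up to 1920x1080).
# The helper itself is illustrative, not part of the API.
import os
from PIL import Image  # pip install Pillow

ALLOWED_FORMATS = {"JPEG", "PNG", "GIF", "BMP"}

def check_image(path: str) -> None:
    size = os.path.getsize(path)
    if not 1024 <= size <= 6 * 1024 * 1024:
        raise ValueError(f"file size {size} B is outside the allowed 1 KB-6 MB range")
    with Image.open(path) as img:
        if img.format not in ALLOWED_FORMATS:
            raise ValueError(f"unsupported format: {img.format}")
        # Larger images raise the minimum detectable face size proportionally.
        scale = max(img.width / 1920, img.height / 1080, 1.0)
        print(f"minimum detectable face is roughly {int(36 * scale)} px square")
```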
⚼ Request

POST /detect

{
  detectionModel: string,
  recognitionModel: string,
  returnFaceId: boolean,
  returnFaceAttributes: array,
  returnFaceLandmarks: boolean,
  returnRecognitionModel: boolean,
  faceIdTimeToLive: integer,
  body: {
    url: string
  }
}
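A hedged sketch of this request using the requests library. The placeholder endpoint and key, and the `/face/v1.0` path prefix, are assumptions; substitute your own resource values. The attribute list is restricted to attributes the chosen detection model supports:

```python
# Example POST /detect call; ENDPOINT, KEY, and the /face/v1.0 prefix
# are assumptions to be replaced with your own resource values.
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # assumption
KEY = "YOUR-SUBSCRIPTION-KEY"  # assumption

resp = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
        "returnFaceId": "true",
        "returnFaceLandmarks": "false",
        "returnFaceAttributes": "headPose,mask,qualityForRecognition",
        "returnRecognitionModel": "true",
        "faceIdTimeToLive": 86400,  # seconds until the stored face features are deleted
    },
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={"url": "https://example.com/photo.jpg"},  # the body's url field
)
if resp.ok:
    faces = resp.json()  # one entry per detected face, largest rectangle first
```

Because the detected faceIds are associated with the recognitionModel given here, pass the same model to any follow-up "Verify", "Identify", or "Find Similar" calls.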
⚐ Response (200)

{
  faceId: string,
  recognitionModel: enum,
  faceRectangle: {
    top: integer,
    left: integer,
    width: integer,
    height: integer
  },
  faceLandmarks: {
    pupilLeft: { x: number, y: number },
    pupilRight: { x: number, y: number },
    noseTip: { x: number, y: number },
    mouthLeft: { x: number, y: number },
    mouthRight: { x: number, y: number },
    eyebrowLeftOuter: { x: number, y: number },
    eyebrowLeftInner: { x: number, y: number },
    eyeLeftOuter: { x: number, y: number },
    eyeLeftTop: { x: number, y: number },
    eyeLeftBottom: { x: number, y: number },
    eyeLeftInner: { x: number, y: number },
    eyebrowRightInner: { x: number, y: number },
    eyebrowRightOuter: { x: number, y: number },
    eyeRightInner: { x: number, y: number },
    eyeRightTop: { x: number, y: number },
    eyeRightBottom: { x: number, y: number },
    eyeRightOuter: { x: number, y: number },
    noseRootLeft: { x: number, y: number },
    noseRootRight: { x: number, y: number },
    noseLeftAlarTop: { x: number, y: number },
    noseRightAlarTop: { x: number, y: number },
    noseLeftAlarOutTip: { x: number, y: number },
    noseRightAlarOutTip: { x: number, y: number },
    upperLipTop: { x: number, y: number },
    upperLipBottom: { x: number, y: number },
    underLipTop: { x: number, y: number },
    underLipBottom: { x: number, y: number }
  },
  faceAttributes: {
    age: number,
    smile: number,
    facialHair: {
      moustache: number,
      beard: number,
      sideburns: number
    },
    glasses: enum,
    headPose: {
      pitch: number,
      roll: number,
      yaw: number
    },
    hair: {
      bald: number,
      invisible: boolean,
      hairColor: [
        {
          color: enum,
          confidence: number
        }
      ]
    },
    occlusion: {
      foreheadOccluded: boolean,
      eyeOccluded: boolean,
      mouthOccluded: boolean
    },
    accessories: [
      {
        type: enum,
        confidence: number
      }
    ],
    blur: {
      blurLevel: enum,
      value: number
    },
    exposure: {
      exposureLevel: enum,
      value: number
    },
    noise: {
      noiseLevel: enum,
      value: number
    },
    mask: {
      noseAndMouthCovered: boolean,
      type: enum
    },
    qualityForRecognition: enum
  }
}
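Assuming `faces` holds the parsed 200 response from the request sketch above, reading the core fields looks like the following; only the attributes actually requested (and supported by the chosen detection model) will be present:

```python
# Walk the documented response fields; names match the schema above.
for face in faces:
    rect = face["faceRectangle"]
    attrs = face.get("faceAttributes", {})
    print(
        f"faceId={face.get('faceId')} "
        f"rect=({rect['left']},{rect['top']}) {rect['width']}x{rect['height']} "
        f"quality={attrs.get('qualityForRecognition')}"
    )
```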
⚐ Response (default)

{
  $headers: {
    x-ms-error-code: string
  },
  $schema: {
    error: {
      code: string,
      message: string
    }
  }
}
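For non-200 responses, both the x-ms-error-code header and the error body above are available. A minimal handler, reusing `resp` from the request sketch:

```python
# Surface the documented error envelope on failure.
if not resp.ok:
    print("x-ms-error-code:", resp.headers.get("x-ms-error-code"))
    err = resp.json().get("error", {})
    raise RuntimeError(f"{err.get('code')}: {err.get('message')}")
```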